
    Rivridis Assistant: Local LLM Agent Architecture and Implementation

    Rivridis Assistant is a local AI agent built for Windows that runs large language models (LLMs) entirely on the user’s machine. The system is designed for privacy, extensibility, and automation, allowing custom workflows without relying on cloud services.

    System Overview

    The architecture consists of three main components:

    1. Frontend: Built with PySide (Qt), providing a native Windows GUI for interacting with the assistant.
    2. Backend: Implemented in Python, handling model inference, task orchestration, and function execution.
    3. LLM Integration: Supports local models such as Mistral 7B, using llama-cpp-python for on-device inference and the openai Python library for OpenAI-compatible endpoints.

    The system is modular, enabling easy integration of new models and tools.


    Custom Function Calling Framework

    Rather than relying on orchestration libraries such as LangChain, Rivridis Assistant uses a lightweight, purpose-built function calling framework. Its features include:

    • Task Execution: Functions can perform local PC operations, such as file management, running scripts, or querying system resources.
    • Input/Output Validation: Checks argument and return types before and after each call, so malformed data is rejected early rather than failing partway through a task.
    • Extensibility: New functions can be added simply by defining Python functions with specified inputs and outputs.

    This framework allows the LLM to interact safely with local resources while keeping the system lightweight and modular.
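    A minimal sketch of how such a registry could look, assuming type annotations drive the validation; the names (`tool`, `call_tool`, `list_dir`) are illustrative and not the actual Rivridis API:

```python
# Hypothetical sketch of a function-calling registry with input/output
# validation driven by type annotations. Not the actual Rivridis code.
import inspect
import os

REGISTRY = {}

def tool(func):
    """Register a function so the LLM can invoke it by name."""
    REGISTRY[func.__name__] = func
    return func

def call_tool(name, **kwargs):
    """Validate arguments, run the registered function, then check its result."""
    func = REGISTRY[name]
    sig = inspect.signature(func)
    # Reject arguments whose type does not match the declared annotation.
    for param, value in kwargs.items():
        expected = sig.parameters[param].annotation
        if expected is not inspect.Parameter.empty and not isinstance(value, expected):
            raise TypeError(f"{name}: {param} must be {expected.__name__}")
    result = func(**kwargs)
    # Validate the return value the same way.
    ret = sig.return_annotation
    if ret is not inspect.Signature.empty and not isinstance(result, ret):
        raise TypeError(f"{name}: return value must be {ret.__name__}")
    return result

@tool
def list_dir(path: str) -> list:
    """Example local-PC task: list the files in a directory."""
    return os.listdir(path)
```

    Under this scheme, adding a new capability is just defining an annotated Python function and decorating it with `@tool`, which matches the extensibility goal above.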

    LLM Integration

    Rivridis Assistant supports both local and remote models:

    • Local Models: Using Mistral 7B for inference via llama-cpp-python, allowing large models to run efficiently on local hardware.
    • OpenAI API: Optionally supports models through the openai Python library, enabling hybrid workflows.
    • Function Calling: The assistant can trigger local functions directly based on LLM outputs, enabling automation of complex tasks.
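    One plausible way to wire the function-calling path, assuming the model is prompted to reply either in plain text or with a JSON object of the form {"function": name, "arguments": {...}} (the exact prompt and output format used by Rivridis may differ):

```python
# Hedged sketch: mapping raw LLM output to local function calls.
# Assumes a JSON function-call convention; the real format may differ.
import json
import shutil

def dispatch(model_output, functions):
    """Parse a JSON function call emitted by the model and execute it."""
    try:
        payload = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain-text answer, nothing to execute
    if not isinstance(payload, dict):
        return model_output  # valid JSON but not a function call
    name = payload.get("function")
    if name not in functions:
        raise ValueError(f"unknown function: {name}")
    return functions[name](**payload.get("arguments", {}))

# Example local task the model might trigger: check free disk space.
def disk_free(path="."):
    return shutil.disk_usage(path).free

result = dispatch('{"function": "disk_free", "arguments": {}}',
                  {"disk_free": disk_free})
```

    With this shape, the same dispatch code works whether the tokens came from Mistral 7B via llama-cpp-python or from an OpenAI-compatible endpoint.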

    Frontend and Workflow

    The GUI provides:

    • Input for natural language queries or commands
    • Display of LLM-generated outputs and results from executed functions
    • Logging of executed tasks for debugging and reproducibility

    The frontend communicates with the backend via Python function calls, ensuring low latency and a responsive experience.
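    A simplified sketch of that in-process path, including the task log; the class and method names here are assumed for illustration, not taken from the Rivridis source:

```python
# Illustrative frontend->backend plumbing: the GUI calls the backend
# directly in-process, and every executed task is logged.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("assistant")

class Backend:
    def __init__(self):
        self.history = []  # record of (query, response) for reproducibility

    def handle_query(self, text):
        """Entry point the GUI invokes for each user query."""
        log.info("query: %s", text)
        response = self._run_llm(text)
        self.history.append((text, response))
        return response

    def _run_llm(self, text):
        # Placeholder: a real build would run llama-cpp-python inference here.
        return f"echo: {text}"
```

    Because the call is a plain Python function invocation rather than IPC or HTTP, there is no serialization overhead between the GUI and the model layer.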


    Key Design Decisions

    1. Local-first: All computation occurs on the user’s machine to protect sensitive data.
    2. Minimal Dependencies: Avoided heavy orchestration libraries to reduce overhead and improve transparency.
    3. Function Calling Framework: Provides a safe, extensible mechanism for LLMs to perform local tasks.
    4. Modular LLM Support: Any model compatible with llama-cpp-python or the OpenAI API can be integrated with minimal changes.

    Conclusion

    Rivridis Assistant demonstrates a modular and extensible approach to building local LLM agents for desktop environments. By combining Python, PySide, local LLMs, and a custom function framework, it provides a foundation for secure, automation-focused AI tools that operate fully offline.